Lecture – Hearn, Dangerous ideas, "Putting Minsky and Brooks together"

Greg Detre

Wednesday, April 16, 2003

NE43 playroom

 

Minsky:

gofai – hmmmm, not really

search – again, not so much

top-down

software

abstract symbols

Brooks:

nouvelle ai

behaviour-based, embodied

cognition vs perception/action

physical grounding

robots

bottom-up

 

astonishingly, he's giving the lecture as though some people won't have read the SoM…!

'symbol' is not in the index of SoM

argues that we can follow Brooks' behaviour-based methodology without actually building robots

 

Brooks

argued that the old serial decomposition (perception/modelling, planning, task execution, motor control, actuators) was too slow for neurons to implement

threw all of that out

cognition is in the eye of the observer – it's emergent – parallel layers run from sensors to actuators, with just a bunch of task-achieving behaviours in the middle
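to pin down the contrast for myself, a rough Python sketch (mine, not anything shown in the lecture – the layer names and threshold are invented): the serial pipeline's stages run one after another, whereas subsumption-style task layers each map sensors to actuators directly, with a higher-priority layer simply suppressing the one below:

# classical serial decomposition: each stage waits on the previous one
def serial_pipeline(distance_to_obstacle):
    world_model = {"obstacle_ahead": distance_to_obstacle < 1.0}      # perception/modelling
    plan = "turn" if world_model["obstacle_ahead"] else "forward"     # planning
    return plan                                                       # task execution -> motor control

# subsumption-style: task-achieving layers, each a direct sensor -> actuator mapping
def avoid_layer(distance_to_obstacle):
    return "turn" if distance_to_obstacle < 1.0 else None

def wander_layer(distance_to_obstacle):
    return "forward"

def subsumption_step(distance_to_obstacle):
    # conceptually the layers all run in parallel; here the higher-priority
    # layer's output just suppresses the one below it
    for layer in (avoid_layer, wander_layer):
        command = layer(distance_to_obstacle)
        if command is not None:
            return command

print(serial_pipeline(0.5), subsumption_step(0.5))   # both -> "turn"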

 

argues that Minsky's Model-6 doesn't look so different

I'm not so sure – what it seems to come down to is serial vs parallel, and Minsky's layers do work in parallel

Push: you're comparing a Minskian tower of reflection vs Brooks' super-goals

 

reflective – thinking about recent thinking

self-reflective – thinking about self as a larger entity, goals etc.

 

shows his agent-based little monster that goes round a world putting things in boxes – argues that it exhibits some of the Brooksian behaviour-based ideas

like what???

there's no learning yet

 

I don't see the kind of k-line he argues for in the suppression module of the subsumption architecture

 

k-line based programming language
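my own toy reading of the k-line idea (illustrative only – the Agent/KLine classes and agent names are invented, this isn't Hearn's actual language): a k-line attaches to whichever agents were active when something worked, and activating it later re-arouses that set:

class Agent:
    def __init__(self, name):
        self.name = name
        self.active = False

class KLine:
    # attaches to whatever agents are active at the moment it's made
    def __init__(self, agents):
        self.attached = [a for a in agents if a.active]

    # activating the k-line later re-creates that partial mental state
    def activate(self):
        for a in self.attached:
            a.active = True

grasp, lift, look = Agent("grasp"), Agent("lift"), Agent("look")
grasp.active = lift.active = True          # the agents in play when a problem got solved
k = KLine([grasp, lift, look])

for a in (grasp, lift, look):              # later, everything has gone quiet
    a.active = False
k.activate()                               # re-arouse the remembered set
print([a.name for a in (grasp, lift, look) if a.active])   # -> ['grasp', 'lift']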

 

you're allowed state in a subsumption architecture – that is, each box has its own internal state
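again just for my notes, a sketch of that point (illustrative – the box and its behaviour are made up): each box is a little state machine holding its own local state, and a higher layer can suppress its output on the wire:

class WanderBox:
    # each box keeps its own local state; no shared world model
    def __init__(self):
        self.ticks = 0
        self.heading = "north"

    def step(self, suppressed=False):
        self.ticks += 1
        if self.ticks % 3 == 0:                      # internal state drives the behaviour
            self.heading = "east" if self.heading == "north" else "north"
        return None if suppressed else self.heading  # a higher layer can suppress the output

box = WanderBox()
print([box.step(suppressed=(t == 2)) for t in range(5)])
# -> ['north', 'north', None, 'east', 'east']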

 

The 5 questions

why should I fear your research?

why should I rejoice in your research?

what should I tell my mum about this?

what's the most interesting thing you've discovered?

what's the most recent thing you've discovered?

 

Q&A

Push: do you really think that you can build a SoM programming language? will it be broad enough? it's kind of analogous to the way you want to use different programming languages for different problems…

e.g. perhaps you might want to build first-order logic natively into an intelligent system

 

Brooks: didn't really see SoM and the subsumption architecture as being incompatible

 

they reckon that one of the problems of AI is that there aren't obvious milestones demarcating progress

after all, apparently Bobby Fischer isn't actually that smart

make the usual 'once we understand how it works, it's no longer considered intelligent'…

 

Questions

why hasn't anyone actually built a SoM architecture so far???

Push: reference: Carbonell – derivational analogy – directly based on k-lines

and it's had a catalytic role

 

 

does it matter that it doesn't self-organise???

 

aren't both systems pretty static??? a problem for learning/self-organising…